Modern machine learning pipelines are limited by data availability, storage quotas, privacy regulations, and expensive annotation processes. These constraints make it difficult or impossible to maintain a large-scale model trained on growing annotation sets. Continual learning directly approaches this problem, with the ultimate goal of devising methods where a neural network effectively learns relevant patterns for new (unseen) classes without significantly altering its performance on previously learned ones. In this paper, we address the problem of continual learning for video data. We introduce PIVOT, a novel method that leverages the extensive knowledge in pre-trained models from the image domain, thereby reducing the number of trainable parameters and the associated forgetting. Unlike previous methods, ours is the first approach that effectively uses prompting mechanisms for continual learning without any in-domain pre-training. Our experiments show that PIVOT improves over state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
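The abstract describes the mechanism only at a high level. As a rough sketch of the general idea of prompt-based continual learning with a frozen pre-trained image backbone, where only a small set of prompt parameters and a head are trained per task, consider the following; the names, the additive fusion, and the temporal pooling are illustrative assumptions rather than PIVOT's actual design (prompt tokens are typically prepended to a frozen transformer's input sequence).

```python
# Minimal sketch of prompt-based continual learning with a frozen image backbone.
# Hypothetical shapes and names; not the actual PIVOT implementation.
import torch
import torch.nn as nn

class PromptedClassifier(nn.Module):
    def __init__(self, backbone, feat_dim, num_prompts, num_classes):
        super().__init__()
        self.backbone = backbone                 # pre-trained image encoder, kept frozen
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Only the prompts and the classification head are trainable,
        # which keeps the number of updated parameters (and the forgetting) small.
        self.prompts = nn.Parameter(torch.randn(num_prompts, feat_dim) * 0.02)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):                   # frames: (B, T, C, H, W) video clip
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))     # (B*T, feat_dim) frame features
        feats = feats.view(b, t, -1).mean(dim=1)        # temporal average pooling
        feats = feats + self.prompts.mean(dim=0)        # simple additive prompt conditioning
        return self.head(feats)
```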
Curriculum learning is a powerful training method that can lead to faster and better training in some settings. However, this method requires a notion of which examples are difficult and which are easy, which is not always trivial to provide. A recent metric called the C-score acts as a proxy for example difficulty by relating it to learning consistency. Unfortunately, computing it is quite compute-intensive, which limits its applicability to alternative datasets. In this work, we train models through different methods to predict the C-score for CIFAR-100 and CIFAR-10. We find, however, that these models generalize poorly both within the same distribution and out of distribution. This suggests that the C-score is not defined by the individual characteristics of each sample, but rather by other factors. We hypothesize that a sample's relation to its neighbours, in particular how many of them share the same label, can help explain C-scores. We plan to explore this in future work.
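The hypothesis in the final sentences suggests a simple quantity to probe: for each sample, the fraction of its nearest neighbours that share its label. A minimal sketch, assuming some feature space, a choice of k, and Euclidean distance (none of which are specified in the abstract):

```python
# Score each sample by the fraction of its k nearest neighbours that share its label.
# Feature choice, k, and the distance metric are assumptions for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbour_label_agreement(features, labels, k=10):
    """features: (N, D) array; labels: (N,) array. Returns (N,) agreement scores."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)   # +1: the query itself
    _, idx = nn.kneighbors(features)
    neighbour_labels = labels[idx[:, 1:]]                    # drop the sample itself
    return (neighbour_labels == labels[:, None]).mean(axis=1)
```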
Recently, few-shot video classification has received growing interest. Current approaches mostly focus on effectively exploiting the temporal dimension in videos to improve learning under low-data regimes. However, most works have largely ignored that videos are often accompanied by rich textual descriptions that can also be an essential source of information to handle few-shot recognition cases. In this paper, we propose to leverage these human-provided textual descriptions as privileged information when training a few-shot video classification model. Specifically, we formulate a text-based task conditioner to adapt video features to the few-shot learning task. Furthermore, our model follows a transductive setting, improving its task-adaptation ability by using the support textual descriptions and query instances to update a set of class prototypes. Our model achieves state-of-the-art performance on four challenging benchmarks commonly used to evaluate few-shot video action classification models.
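A minimal sketch of the kind of transductive prototype refinement described above: class prototypes are initialised from support video features fused with their textual descriptions, then updated with softly-assigned query features. The fusion and the update rule are illustrative assumptions, not the paper's actual formulation.

```python
# Transductive prototype refinement sketch; all design choices here are assumptions.
import torch
import torch.nn.functional as F

def refine_prototypes(support_feats, support_text, support_labels, query_feats,
                      num_classes, alpha=0.5, steps=3):
    fused = support_feats + support_text                      # naive video/text fusion
    protos = torch.stack([fused[support_labels == c].mean(0) for c in range(num_classes)])
    for _ in range(steps):
        logits = -torch.cdist(query_feats, protos)            # nearest-prototype scores
        soft = F.softmax(logits, dim=1)                       # (Q, num_classes) soft assignments
        query_protos = (soft.t() @ query_feats) / (soft.sum(0, keepdim=True).t() + 1e-8)
        protos = alpha * protos + (1 - alpha) * query_protos  # transductive update
    return protos
```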
When learning tasks over time, artificial neural networks suffer from a problem known as catastrophic forgetting (CF). It happens when the weights of a network are overwritten during the training of a new task, causing the old information to be forgotten. To address this issue, we propose MetA Reusable Knowledge, or MARK, a new method that fosters weight reusability instead of overwriting when learning a new task. Specifically, MARK keeps a set of shared weights among tasks. We envision these shared weights as a common knowledge base (KB) that is not only used to learn new tasks, but is also enriched with new knowledge as the model learns new tasks. The key components behind MARK are twofold. On the one hand, a metalearning approach provides the key mechanism to incrementally enrich the KB with new knowledge and to foster weight reusability among tasks. On the other hand, a set of trainable masks provides the key mechanism to selectively choose from the KB the relevant weights needed to solve each task. Using MARK, we achieve state-of-the-art results on several popular benchmarks, surpassing the best-performing methods in terms of average accuracy on the 20-split MiniImageNet dataset, while achieving almost zero forgetting and using 55% of the number of parameters. Moreover, an ablation study provides evidence that MARK is indeed learning reusable knowledge that is selectively used by each task.
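A minimal sketch of the core idea of a shared knowledge base of weights combined with per-task trainable masks that select the relevant weights; the single-layer structure and sigmoid gating are illustrative assumptions, not MARK's exact design.

```python
# Shared knowledge base of weights with per-task trainable masks (illustrative sketch).
import torch
import torch.nn as nn

class MaskedKBLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_tasks):
        super().__init__()
        self.kb = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)   # shared weights (the KB)
        # One trainable mask per task, selecting which KB weights that task uses.
        self.masks = nn.Parameter(torch.zeros(num_tasks, out_dim, in_dim))

    def forward(self, x, task_id):
        gate = torch.sigmoid(self.masks[task_id])   # soft selection in [0, 1]
        return x @ (gate * self.kb).t()             # task-specific effective weights
```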
Every year, physicians face a growing demand for image-based diagnoses from patients, a problem that recent artificial intelligence methods can help address. In this context, we survey the field of automatic report generation from medical images, focusing on methods based on deep neural networks, with respect to: (1) datasets, (2) architecture design, (3) explainability, and (4) evaluation metrics. Our survey identifies interesting developments as well as remaining challenges. Among them, the current evaluation of the generated reports is especially weak, since it mostly relies on traditional natural language processing (NLP) metrics, which do not accurately capture medical correctness.
In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average-reward as the discount factor $\gamma$ goes to $1$, and moreover, when $\gamma$ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. Then, we investigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
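For reference, the convergence result and the robust Bellman equation mentioned above can be written in one common form (the normalisation and notation here are illustrative assumptions; the precise statements and conditions are those of the paper):

$$\lim_{\gamma \to 1}\,(1-\gamma)\,V^{\pi}_{\gamma,\mathcal{P}}(s) \;=\; g^{\pi}_{\mathcal{P}}(s),$$

where $V^{\pi}_{\gamma,\mathcal{P}}$ is the worst-case discounted value of policy $\pi$ over the uncertainty set $\mathcal{P}$ and $g^{\pi}_{\mathcal{P}}$ is its worst-case average reward, and

$$g^{*} + h(s) \;=\; \max_{a}\Big[\,r(s,a) + \min_{p\,\in\,\mathcal{P}_{s,a}}\sum_{s'} p(s')\,h(s')\,\Big],$$

where $g^{*}$ is the optimal robust average reward and $h$ is a relative value (bias) function; robust relative value iteration repeatedly applies this update (with a reference-state normalisation) until convergence.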
Masked Image Modelling (MIM) has been shown to be an efficient self-supervised learning (SSL) pre-training paradigm when paired with transformer architectures and in the presence of a large amount of unlabelled natural images. The difficulty of accessing and obtaining large amounts of labelled data, combined with the availability of unlabelled data in the medical imaging domain, makes MIM an interesting approach to advance deep learning (DL) applications based on 3D medical imaging data. Nevertheless, SSL and, in particular, MIM applications with medical imaging data are rather scarce, and there is still uncertainty around the potential of such a learning paradigm in the medical domain. We study MIM in the context of Prostate Cancer (PCa) lesion classification with T2 weighted (T2w) axial magnetic resonance imaging (MRI) data. In particular, we explore the effect of using MIM when coupled with convolutional neural networks (CNNs) under different conditions, such as different masking strategies, obtaining better results in terms of AUC than other pre-training strategies like ImageNet weight initialization.
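A minimal sketch of a masked-image-modelling pre-training step of the kind described above, shown here for a single 2D slice: random patches are hidden and the network is trained to reconstruct them. Patch size, masking ratio, and the reconstruction loss are assumptions, not the paper's exact setup.

```python
# Masked-image-modelling pre-training sketch for a CNN encoder-decoder (assumed setup).
import torch
import torch.nn.functional as F

def random_patch_mask(x, patch=16, ratio=0.6):
    """x: (B, C, H, W). Returns the masked input and a binary mask (1 = visible)."""
    b, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > ratio).float()
    mask = F.interpolate(keep, size=(h, w), mode="nearest")
    return x * mask, mask

def mim_step(encoder_decoder, x, optimizer):
    x_masked, mask = random_patch_mask(x)
    recon = encoder_decoder(x_masked)                  # CNN predicts the full image
    loss = (((recon - x) ** 2) * (1 - mask)).mean()    # penalise only the masked regions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```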
We introduce a machine-learning (ML)-based weather simulator, called "GraphCast", which outperforms the most accurate deterministic operational medium-range weather forecasting system in the world, as well as all previous ML baselines. GraphCast is an autoregressive model, based on graph neural networks and a novel high-resolution multi-scale mesh representation, which we trained on historical weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF)'s ERA5 reanalysis archive. It can make 10-day forecasts, at 6-hour time intervals, of five surface variables and six atmospheric variables, each at 37 vertical pressure levels, on a 0.25-degree latitude-longitude grid, which corresponds to roughly 25 x 25 kilometer resolution at the equator. Our results show GraphCast is more accurate than ECMWF's deterministic operational forecasting system, HRES, on 90.0% of the 2760 variable and lead time combinations we evaluated. GraphCast also outperforms the most accurate previous ML-based weather forecasting model on 99.2% of the 252 targets it reported. GraphCast can generate a 10-day forecast (35 gigabytes of data) in under 60 seconds on Cloud TPU v4 hardware. Unlike traditional forecasting methods, ML-based forecasting scales well with data: by training on bigger, higher quality, and more recent data, the skill of the forecasts can improve. Together these results represent a key step forward in complementing and improving weather modeling with ML, open new opportunities for fast, accurate forecasting, and help realize the promise of ML-based simulation in the physical sciences.
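The forecasting loop described above follows a simple autoregressive pattern. The sketch below only reflects the numbers stated in the abstract (6-hour steps out to 10 days, 5 surface plus 6 x 37 atmospheric variables) and assumes the standard 721 x 1440 grid for 0.25-degree resolution; the one-step model itself is left as a placeholder.

```python
# Autoregressive rollout sketch using the dimensions stated in the abstract (assumed grid).
import numpy as np

LAT, LON = 721, 1440             # standard 0.25-degree latitude-longitude grid (assumption)
CHANNELS = 5 + 6 * 37            # 5 surface variables + 6 atmospheric variables x 37 levels
STEPS = 10 * 24 // 6             # 10-day forecast at 6-hour intervals = 40 steps

def rollout(step_model, state):
    """state: (LAT, LON, CHANNELS). Applies the learned one-step model autoregressively."""
    forecast = []
    for _ in range(STEPS):
        state = step_model(state)     # predict the state 6 hours ahead
        forecast.append(state)
    return np.stack(forecast)         # (STEPS, LAT, LON, CHANNELS)
```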
Simulating rigid collisions among arbitrary shapes is notoriously difficult due to complex geometry and the strong non-linearity of the interactions. While graph neural network (GNN)-based models are effective at learning to simulate complex physical dynamics, such as fluids, cloth and articulated bodies, they have been less effective and efficient on rigid-body physics, except with very simple shapes. Existing methods that model collisions through the meshes' nodes are often inaccurate because they struggle when collisions occur on faces far from nodes. Alternative approaches that represent the geometry densely with many particles are prohibitively expensive for complex shapes. Here we introduce the Face Interaction Graph Network (FIGNet) which extends beyond GNN-based methods, and computes interactions between mesh faces, rather than nodes. Compared to learned node- and particle-based methods, FIGNet is around 4x more accurate in simulating complex shape interactions, while also 8x more computationally efficient on sparse, rigid meshes. Moreover, FIGNet can learn frictional dynamics directly from real-world data, and can be more accurate than analytical solvers given modest amounts of training data. FIGNet represents a key step forward in one of the few remaining physical domains which have seen little competition from learned simulators, and offers allied fields such as robotics, graphics and mechanical design a new tool for simulation and model-based planning.
Probabilistic models based on continuous latent spaces, such as variational autoencoders, can be understood as uncountable mixture models whose components depend continuously on the latent code. They have proven to be expressive tools for generative and probabilistic modelling, but are at odds with tractable probabilistic inference, that is, computing marginals and conditionals of the represented probability distribution. Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, which allows them to perform exact inference, but they often show subpar performance compared to continuous latent-space models. In this paper, we investigate a hybrid approach, namely continuous mixtures of tractable models with a small latent dimension. While these models are analytically intractable, they are well suited to numerical integration schemes based on a finite set of integration points. With a large enough number of integration points, the approximation becomes essentially exact. Moreover, using a finite set of integration points, the approximation can be compiled into a PC that performs "exact inference in an approximate model". In experiments, we show that this simple scheme proves remarkably effective, as PCs constructed this way set a new state of the art for tractable models on many standard density estimation benchmarks.
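The scheme can be summarised in a single approximation; the notation for integration points and weights is an assumption for illustration:

$$p(\mathbf{x}) \;=\; \int p(\mathbf{x}\mid \mathbf{z})\, p(\mathbf{z})\, d\mathbf{z} \;\approx\; \sum_{i=1}^{N} w_i\, p(\mathbf{x}\mid \mathbf{z}_i).$$

The right-hand side is a finite mixture of $N$ tractable components, so it can itself be compiled into a probabilistic circuit (a sum node over the $N$ component circuits), which is what enables "exact inference in the approximate model".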